AS: Deep Learning

1 - Setup

  • Importing the NNPy module

In [2]:
from NNPy import *
  • Functions for building a perceptron or a multi-layer perceptron

NB: a "Network" is composed of a module and a loss


In [3]:
def perceptron(inDim, outDim):
    # Build a perceptron: a single linear module with a hinge loss
    lm = LinearModule(inDim, outDim)
    hl = HingeLoss()
    hm = HorizontalModule([lm])

    # Perceptron
    return NetworkModule([hm], hl)

def multiLayerPerceptron(inDim, hidden, out):
    # Build a multi-layer perceptron: linear -> tanh -> linear, with a square loss
    hm = HorizontalModule([LinearModule(inDim, hidden),
                           TanhModule(hidden, hidden),
                           LinearModule(hidden, out)])
    return NetworkModule([hm], SquareLoss())
  • Quick test: MNIST 8 vs 6

In [9]:
from DataClass import * 

print("----======  MNIST 8/6  =======----")

trainV,trainL,testV,testL = getMnistDualDataset()

print("----Perceptron----")
NBITER = 10
GD_STEP = 0.00001
network = perceptron(28*28,1)
network.trainTest(trainV,trainL,testV,testL,NBITER,GD_STEP)

print('----multiLayerPerceptron-----')
HIDDEN = 2
GD_STEP = 0.00001
network = multiLayerPerceptron(28*28,HIDDEN,1)
network.trainTest(trainV,trainL,testV,testL,NBITER,GD_STEP)


----======  MNIST 8/6  =======----
6876 6s and 6825 8s
13701/13701 training vectors
2698/2698 training vectors after subsampling
1378 training examples and 1320 testing examples 
----Perceptron----
=======TRAIN ERROR=======
656 correct (47.605225%), 722 incorrect (52.394775%) 
Learning done
847 correct (61.465893%), 531 incorrect (38.534107%) 
Learning done
1070 correct (77.648766%), 308 incorrect (22.351234%) 
Learning done
1165 correct (84.542816%), 213 incorrect (15.457184%) 
Learning done
1243 correct (90.203193%), 135 incorrect (9.796807%) 
Learning done
1260 correct (91.436865%), 118 incorrect (8.563135%) 
Learning done
1287 correct (93.396226%), 91 incorrect (6.603774%) 
Learning done
1299 correct (94.267054%), 79 incorrect (5.732946%) 
Learning done
1305 correct (94.702467%), 73 incorrect (5.297533%) 
Learning done
1308 correct (94.920174%), 70 incorrect (5.079826%) 
Learning done
=======TEST ERROR=======
1236 correct (93.636364%), 84 incorrect (6.363636%) 
----multiLayerPerceptron-----
=======TRAIN ERROR=======
707 correct (51.306241%), 671 incorrect (48.693759%) 
Learning done
713 correct (51.741655%), 665 incorrect (48.258345%) 
Learning done
711 correct (51.596517%), 667 incorrect (48.403483%) 
Learning done
716 correct (51.959361%), 662 incorrect (48.040639%) 
Learning done
733 correct (53.193033%), 645 incorrect (46.806967%) 
Learning done
740 correct (53.701016%), 638 incorrect (46.298984%) 
Learning done
740 correct (53.701016%), 638 incorrect (46.298984%) 
Learning done
740 correct (53.701016%), 638 incorrect (46.298984%) 
Learning done
740 correct (53.701016%), 638 incorrect (46.298984%) 
Learning done
740 correct (53.701016%), 638 incorrect (46.298984%) 
Learning done
=======TEST ERROR=======
698 correct (52.878788%), 622 incorrect (47.121212%) 
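The multi-layer perceptron above uses a single ±1 output, which fits the binary 8-vs-6 task. For the full 10-class MNIST below, a square loss over 10 outputs typically expects one-hot target vectors. A minimal sketch of that encoding (independent of NNPy; the helper name is ours):

```python
import numpy as np

def one_hot(labels, n_classes=10):
    """Map integer class labels to one-hot rows, e.g. 3 -> [0,0,0,1,0,0,0,0,0,0]."""
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((labels.size, n_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

targets = one_hot([0, 3, 9])
print(targets.shape)          # (3, 10)
print(targets[1].argmax())    # 3
```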
  • Full MNIST

In [15]:
from sklearn.datasets import fetch_mldata
mnist=fetch_mldata('MNIST original')
mnist


Out[15]:
{'COL_NAMES': ['label', 'data'],
 'DESCR': 'mldata.org dataset: mnist-original',
 'data': array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ..., 
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
 'target': array([ 0.,  0.,  0., ...,  9.,  9.,  9.])}
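Note that `fetch_mldata` depended on the now-defunct mldata.org mirror and has been removed from recent scikit-learn releases; the current equivalent is `fetch_openml`. A sketch of the replacement (wrapped in a function and not called here, since it downloads the 70000-sample dataset):

```python
from sklearn.datasets import fetch_openml

def load_mnist():
    # 'mnist_784' is the OpenML name of the same 70000-sample MNIST dataset;
    # as_frame=False returns plain numpy arrays, like fetch_mldata used to.
    mnist = fetch_openml('mnist_784', version=1, as_frame=False)
    return mnist.data, mnist.target.astype(int)
```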

In [22]:
HIDDEN = 50
network = multiLayerPerceptron(28*28,HIDDEN,10)
for a, b in zip(mnist["data"], mnist["target"]):
    print(a)
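Before training on the full set, the raw uint8 pixels (0-255) are usually scaled to [0, 1], which also makes a single GD_STEP easier to tune across layers. A sketch on a synthetic stand-in with the same dtype and shape as `mnist["data"]` (the array here is random, not real MNIST):

```python
import numpy as np

# Synthetic stand-in: 100 fake flattened 28x28 images, same dtype as mnist["data"]
rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(100, 28 * 28), dtype=np.uint8)

X = data.astype(np.float64) / 255.0  # scale pixel values to [0, 1]

# Shuffled train/test split, e.g. 80/20
idx = rng.permutation(len(X))
train, test = X[idx[:80]], X[idx[80:]]
print(X.min() >= 0.0 and X.max() <= 1.0)  # True
```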
